A note on extension theorems and its connection to universal consistency in machine learning

Authors

  • Andreas Christmann
  • Florian Dumpert
  • Dao-Hong Xiang
Abstract

Statistical machine learning plays an important role in modern statistics and computer science. One main goal of statistical machine learning is to provide universally consistent algorithms, i.e., algorithms whose estimator converges in probability, or in some stronger sense, to the Bayes risk or to the Bayes decision function. Kernel methods based on minimizing the regularized risk over a reproducing kernel Hilbert space (RKHS) belong to this class of statistical machine learning methods. It is in general unknown which kernel yields optimal results for a particular data set or for the unknown probability measure. Hence, various kernel learning methods have been proposed to choose the kernel, and therefore also its RKHS, in a data-adaptive manner. Nevertheless, many practitioners often use the classical Gaussian RBF kernel or certain Sobolev kernels with good success. The goal of this short note is to offer one possible theoretical explanation for this empirical fact.


Related articles

Mobile Learning for Transforming Education and Improving Learning Outcomes on Agriculture in India

The teledensity in India is estimated at 74.50 per cent (January 2014), with the subscriber base increasing each day. No other revolution in human history has transformed the communication scenario to the extent that mobile technologies have. India has the fastest growing telecom network in the world, given its high population and development potential. Education is at a critical juncture...


Consistency of Random Forests and Other Averaging Classifiers

In the last years of his life, Leo Breiman promoted random forests for use in classification. He suggested using averaging as a means of obtaining good discrimination rules. The base classifiers used for averaging are simple and randomized, often based on random samples from the data. He left a few questions unanswered regarding the consistency of such rules. In this paper, we give a number of ...


Learning from Examples as an Inverse Problem

Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy between certain learning algorithms and regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms suc...


Vasectomy in Mouse Model Using Electrosurgery Machine

Vasectomy in laboratory animals is a crucial step in the production of surrogate female mice. The surrogate mothers play a key role in successful embryo transfer, one of the most important steps for the production of transgenic animal models, investigation of preimplantation embryo development, and revitalization of cryopreserved strains. Abdominal and scrotal surgeries are common surgical proce...


Strong Limit Theorems for the Bayesian Scoring Criterion in Bayesian Networks

In the machine learning community, the Bayesian scoring criterion is widely used for model selection problems. One of the fundamental theoretical properties justifying the usage of the Bayesian scoring criterion is its consistency. In this paper we refine this property for the case of binomial Bayesian network models. As a by-product of our derivations we establish strong consistency and obtain...



Journal:

Volume   Issue

Pages  -

Publication year: 2016